# High-efficiency AI
Google Gemma 2b AWQ 4bit Smashed
A 4-bit quantized version of the google/gemma-2b model, compressed with AWQ (Activation-aware Weight Quantization) to improve inference efficiency and reduce memory and compute requirements.
Large Language Model
Transformers
PrunaAI
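
Because the model ships in the standard Transformers format with its AWQ quantization config, it can typically be loaded like any other causal language model. The following is a minimal sketch, assuming the Hugging Face repository id `PrunaAI/google-gemma-2b-AWQ-4bit-smashed` (verify the exact id on the model page) and that `autoawq` is installed alongside `transformers` on a CUDA-capable machine:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; check the model page for the exact name.
repo_id = "PrunaAI/google-gemma-2b-AWQ-4bit-smashed"

# Transformers reads the AWQ quantization config stored with the model
# and loads the 4-bit weights directly; no extra quantization step is needed.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Explain AWQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Compared with the full-precision google/gemma-2b checkpoint, the 4-bit AWQ weights occupy roughly a quarter of the memory, which is the main source of the efficiency gains described above.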